[benchmarks] Add `no-skip` argument. #6484
Conversation
```python
# 2. the model is not compatible with the experiment configuration
#
# Otherwise, we should go ahead and execute it.
if (not self._args.no_skip and not self.model_loader.is_compatible(
```
Don't we need to check for `no_skip` in line 99 as well? Hoping to save some time by not loading a model we don't intend to run.
I don't think we can do that. We still have to "load the model" in order to update the environment variables of the child process (which will actually execute the benchmark). That said, this loading won't take many cycles, since we are calling with `dummy=True`: "instantiate the wrapper `BenchmarkModel`, but don't instantiate the actual torchbench model."
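The skip-gating pattern described above can be sketched as follows. This is a hypothetical illustration, not the actual pytorch/xla benchmark runner code: the names `is_compatible`, `load_model`, and the `dummy` parameter are assumptions standing in for the real `model_loader` API.

```python
import argparse

def is_compatible(model_name, config):
    # Hypothetical compatibility check against the experiment config.
    return model_name in config["supported"]

def load_model(model_name, dummy=False):
    # With dummy=True, only a lightweight wrapper is built so the child
    # process can inherit the right environment variables; the real
    # torchbench model would be instantiated later, in the child.
    return {"name": model_name, "dummy": dummy}

def run_benchmark(model_name, config, no_skip=False):
    # Skip incompatible models unless --no-skip was passed.
    if not no_skip and not is_compatible(model_name, config):
        return "skipped"
    model = load_model(model_name, dummy=True)
    return "ran " + model["name"]

parser = argparse.ArgumentParser()
parser.add_argument("--no-skip", action="store_true", dest="no_skip")
args = parser.parse_args(["--no-skip"])

config = {"supported": ["resnet50"]}
# With --no-skip, an otherwise-incompatible model is still executed.
print(run_benchmark("bert", config, no_skip=args.no_skip))
```

With `no_skip=False`, the same call would return `"skipped"` before any model loading happens, which is the time saving the reviewer is asking about.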
Thanks. By the way, I think we can remove
@zpcore Sure. I will open a new PR doing that.
LGTM!
This PR adds an argument for not skipping models.
cc @miladm